Opakapaka Simulation Report

Author
Affiliation

Megumi Oshima, Marc Nadon, John Syslo,
Hongguang Ma, Felipe Carvalho

Pacific Islands Fisheries Science Center

Published

September 17, 2025

GitHub Repository

1 Introduction

The crimson jobfish (Pristipomoides filamentosus), locally known as ’ōpakapaka, represents one of Hawaii’s most culturally and economically significant deep-water bottomfish species, comprising over 50% of the total Deep-7 bottomfish catch and generating close to $1 million annually in commercial value (Luers et al., 2018). As a slow-growing, long-lived species attaining up to 43 years of age (Andrews et al., 2012) and inhabiting depths of 100-400 meters around the Main Hawaiian Islands (Luers et al., 2018), ’ōpakapaka supports both vital commercial and non-commercial fisheries that are deeply embedded in Hawaiian culture and food security. The species is currently managed under the Western Pacific Regional Fishery Management Council’s Hawaii Fishery Ecosystem Plan as part of the Deep-7 bottomfish complex (Western Pacific Regional Fishery Management Council, 2009).

The current stock assessment model for the Deep-7 complex combines all seven species in a surplus production model that relies on combined catch and index-of-abundance data. A single-species model for ’ōpakapaka has been developed for research and to support the Deep 7 assessment, but it has not been used for management. Because ’ōpakapaka is so dominant in the fishery, it is one of the few species of the seven for which an integrated, age-structured model can be built. The benefit of transitioning to an integrated model is that it allows us to incorporate more data, such as length or age compositions and species-specific life-history information. However, collecting the information that goes into an integrated assessment can be resource- and time-intensive. Fishery-independent surveys can provide high-quality, unbiased data that reflect the true population dynamics of targeted species; however, they can be expensive to run on an annual basis. If too little effort is put into the survey and the quality of the data collected is poor, the data are not informative in a stock assessment model. It is therefore important to optimize the sampling procedures of a survey to provide the most informative data while balancing the costs and time required.

The Bottomfish Fishery-Independent Survey in Hawai’i (BFISH) uses a stratified-random sampling design to estimate abundance of the Deep 7 bottomfish species, including ’ōpakapaka, around the Main Hawaiian Islands. The survey has been operational since 2016 and historically involved two gear types, a remote stereo-video camera and hook-and-line research fishing. However, due to operational and financial constraints, starting in 2025 the survey uses only the research fishing gear. Sampling is conducted in 500 x 500 m primary sampling unit grids, distributed among the 24 strata types. Catch rate, fish lengths, and life-history information are collected from the individuals caught with research fishing, and these data are used to develop an index of relative abundance for each species within the Deep 7 stock and for the stock as a whole (#TODO: Ducharme-Barth 2024). The indices of abundance are then used in the stock assessment to inform population trends. Over the years, the number of sampled grids has fluctuated due to logistical or financial constraints. Since the number of grids has a direct impact on the precision of the information derived from the survey, it is crucial to understand the relationship between the number of grids surveyed and its impact on the quantities estimated from the stock assessment model and used for management. Additionally, understanding the implications of alternative sampling designs may help to design more efficient surveys given resource constraints or potential environmental changes.

The objective of this project is to understand how to optimize an operational fishery-independent survey for ’ōpakapaka in the Main Hawaiian Islands by testing how survey effort, under two future recruitment conditions, affects our ability to estimate key stock quantities such as spawning stock biomass and management reference points.

2 Methods

2.1 Simulation Approach

We tested the impact of varying sampling efforts of a fishery-independent survey by simulating abundance trends and data for ’ōpakapaka in the Main Hawaiian Islands with appropriate process and observation error. We fit the simulated data to an age-structured catch-at-age model and estimated spawning stock biomass, fishing mortality, and management reference points. Both the operating model (OM) and estimation model (EM) were built in Stock Synthesis (SS3; Methot and Wetzel 2013) and the simulation-estimation process was implemented using ss3sim (#TODO: Anderson et al 2013, 2014) in the R statistical software environment (#TODO R Core Team 2013). In total, 1,000 iterations were run using the Open Science Grid (#TODO: add citation). Each run involved (1) generating the population dynamics from the OM, with the given F and recruitment deviation timeseries, (2) sampling data with error from the OM, and (3) fitting those data in an EM. For each run, estimated quantities of interest were compared to the “true” values as calculated by the OM. All code for the analysis is available in the GitHub repository.
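The three-step run structure can be sketched in a toy form. The actual analysis used ss3sim and Stock Synthesis in R; the `operating_model` and `sample_index` functions below are simplified, hypothetical stand-ins meant only to show the OM-generate, sample-with-error, fit-EM loop:

```python
import math
import random

def operating_model(f_series, recdevs):
    # Toy age-aggregated dynamics: survival after fishing and natural
    # mortality, plus lognormal recruitment. A stand-in for the Stock
    # Synthesis OM, not the actual model.
    biomass, series = 1000.0, []
    for f, dev in zip(f_series, recdevs):
        recruits = 100.0 * math.exp(dev)
        biomass = biomass * (1 - f) * 0.9 + recruits
        series.append(biomass)
    return series

def sample_index(series, cv, rng):
    # Step 2: observe OM biomass with multiplicative error at a given CV.
    return [b * (1 + rng.gauss(0.0, cv)) for b in series]

rng = random.Random(42)
n_years = 100
f_series = [0.1] * n_years                                  # illustrative F
recdevs = [rng.gauss(0.0, 0.52) for _ in range(n_years)]    # sigma_R = 0.52
true_biomass = operating_model(f_series, recdevs)   # step 1: OM dynamics
index = sample_index(true_biomass, cv=0.15, rng=rng)  # step 2: HSE-level CV
# Step 3 would fit the EM to `index`; relative error then compares the EM
# output against `true_biomass` from the OM.
```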

2.2 Operating Model

The operating model was a single-species, single-sex, age-structured model conditioned on ’ōpakapaka life history parameters and historical fishery data trends from the Deep 7 complex. Historical data included 75 years of catch, CPUE, and size data from commercial and non-commercial fisheries, plus seven years of abundance indices and length composition data from the Bottomfish Fishery-Independent Survey in Hawaii (BFISH). The OM simulated population dynamics and generated data with observation error for a 100-year projection period. Fishing mortality time series were generated based on historical patterns: low values at fishery inception, increasing through the 1970s and remaining high until the late 1990s, then declining in the early 2000s and stabilizing through the end of the time series (Figure 1). Annual fishing mortality values included stochastic variability (standard deviation of 0.002) to introduce realistic variation between iterations. Following the most recent Deep 7 benchmark stock assessment, we applied time-varying ratios for non-commercial catch: 1.94 from 1948-2003 and 1.09 for 2004-2049 (#TODO add citation for deep7 assessment).
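The F time series construction can be sketched as follows. The piecewise pattern and F levels below are illustrative placeholders (the report gives only the qualitative shape), the sd of 0.002 is from the text, and the non-commercial ratios are applied to F here purely for illustration, whereas the assessment applies them to catch:

```python
import random

random.seed(1)
years = list(range(1949, 2049))  # 100-year simulation (year values illustrative)

def base_f(year):
    # Piecewise-linear base pattern: low at fishery inception, high through
    # the 1970s-1990s, declining in the early 2000s, then stable.
    knots = [(1949, 0.01), (1975, 0.25), (1998, 0.25), (2005, 0.10), (2048, 0.10)]
    for (y0, f0), (y1, f1) in zip(knots, knots[1:]):
        if y0 <= year <= y1:
            return f0 + (f1 - f0) * (year - y0) / (y1 - y0)
    return knots[-1][1]

# Stochastic variability between iterations (sd = 0.002, as in the text)
f_commercial = [base_f(y) + random.gauss(0.0, 0.002) for y in years]
# Time-varying non-commercial ratios: 1.94 through 2003, 1.09 from 2004 on
f_noncommercial = [f * (1.94 if y < 2004 else 1.09)
                   for y, f in zip(years, f_commercial)]
```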

Figure 1: Fishing mortality time series for the commercial (blue lines) and non-commercial (orange lines) fisheries.

2.2.1 Recruitment Deviations

The simulation framework was structured to evaluate estimation model performance under varying future conditions. The first 75 years of the 100-year simulation represented the historical period with known data patterns, while the final 25 years constituted “future years” designed to test the robustness of the estimation model against potential changes in recruitment dynamics. Two distinct recruitment scenarios were implemented for these future years to bracket plausible recruitment conditions: (1) a “normal” recruitment scenario that maintained historical mean recruitment levels with no systematic change over time, and (2) a “poor recruitment” scenario that imposed a 40% decline in mean recruitment relative to historical levels. This approach allowed us to assess whether survey sampling intensity affects the estimation model’s ability to detect and respond to recruitment shifts that could significantly impact stock productivity and sustainable harvest levels. A total of 200 recruitment deviation time series were generated: 100 were independent, bias-corrected lognormal random deviates with a standard deviation (\(σ_R\)) of 0.52 for the entire 100 years, and the remaining 100 matched these for the first 75 years, with the last 25 years calculated to drive a decline in recruitment.

The recruitment decline scenario was structured as a two-phase process spanning 25 years. The first phase consisted of a 10-year gradual decline period, during which recruitment decreased from baseline levels to 60% of the historical mean. The annual decline rate (r) was calculated as: \[ r = r_{reduced}^{1/10} - 1 \] where \(r_{reduced}\) is the proportion by which the stock was reduced. Because of the non-linear relationship between spawning biomass and recruitment, the reduced proportion did not need to equal 60%; we used an iterative process to determine the proportion that actually resulted in 60% of the mean recruitment (\(r_{reduced}\) = 0.71). This formulation ensured a smooth exponential decline reaching the target reduction after 10 years. The second phase maintained recruitment at the reduced level (60% of baseline) for an additional 15 years, representing a new regime of lower recruitment conditions.

The baseline recruitment level was established using the mean recruitment from the most recent 15-year period (279.67 individuals), representing contemporary recruitment conditions prior to the simulated decline.

For the decline phase, annual multipliers were calculated as: \[ M_t = (1 + r)^t \]

where t represents the ten years of the decline (t = 1, …, 10). For the lower recruitment regime period (years 11-25), multipliers were set to \(r_{reduced}\), maintaining recruitment at 60% of baseline levels.
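The decline rate and 25-year multiplier schedule described above can be computed directly. This is a minimal sketch using the values given in the text (\(r_{reduced}\) = 0.71, 10 decline years, 15 regime years):

```python
# Annual decline rate reaching the reduced proportion after 10 years:
# r = r_reduced^(1/10) - 1, with r_reduced = 0.71 from the iterative tuning.
r_reduced = 0.71
r = r_reduced ** (1 / 10) - 1  # negative, so (1 + r)^t declines each year

# Multipliers M_t = (1 + r)^t for the 10 decline years, then held at
# r_reduced for the 15-year low-recruitment regime (25 years total).
multipliers = [(1 + r) ** t for t in range(1, 11)] + [r_reduced] * 15
```

By construction, the multiplier at year 10 equals \(r_{reduced}\) exactly, so the exponential decline hands off smoothly to the constant regime phase.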

Projected recruitment levels were converted to recruitment deviations in log-space to maintain consistency with stock assessment model parameterization. Base recruitment deviations were calculated as:

\[ \delta_t = \ln\left(\frac{R_t}{\bar{R}}\right) - \frac{\sigma_R^2}{2} \]

where \(R_t\) represents projected recruitment in year t, \(\bar{R}\) is the baseline mean recruitment (279.67), and \(\sigma_R\) is the recruitment variability parameter (0.52). The bias correction term \(\sigma_R^2 / 2\) accounts for the Jensen’s inequality effect when transforming between arithmetic and log scales.

To incorporate realistic recruitment variability around the deterministic decline trend, we generated 100 Monte Carlo iterations of the poor recruitment time series. For each iteration, recruitment deviations were modified by adding random noise drawn from a normal distribution with mean = 0 and standard deviation = 0.15. This additional variability was intentionally set smaller than the base recruitment variability (\(σ_R\) = 0.52) to ensure the underlying decline signal remained detectable above the noise.

The final recruitment deviations for each iteration i were calculated as: \[ \delta_{t,i} = \delta_t + \varepsilon_{t,i} \]

where \(\varepsilon_{t,i} \sim N(0, 0.15^2)\).
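The bias-corrected deviation and Monte Carlo noise steps can be sketched as below. The multipliers in `projected` are illustrative placeholders, not the actual decline schedule; \(\sigma_R\) = 0.52, \(\bar{R}\) = 279.67, and the 0.15 noise sd are from the text:

```python
import math
import random

sigma_r = 0.52   # recruitment variability parameter from the report
r_bar = 279.67   # baseline mean recruitment from the report
random.seed(0)

def base_deviation(r_t):
    # delta_t = ln(R_t / R_bar) - sigma_R^2 / 2 (bias-corrected log deviation)
    return math.log(r_t / r_bar) - sigma_r ** 2 / 2

# Deterministic projected recruitment for a few years (multipliers illustrative)
projected = [r_bar * m for m in (0.95, 0.85, 0.75, 0.65, 0.60)]
base = [base_deviation(r) for r in projected]

# 100 Monte Carlo iterations: add N(0, 0.15^2) noise around the decline trend
iterations = [[d + random.gauss(0.0, 0.15) for d in base] for _ in range(100)]
```

Note that even a year with no change (\(R_t = \bar{R}\)) gets a deviation of \(-\sigma_R^2/2\), which is the bias correction ensuring the expected recruitment on the arithmetic scale matches the target.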

Figure 2: Log recruitment deviations for one iteration used in the normal (blue line) and poor (orange line) recruitment scenarios.

2.2.2 Initial Fishing Mortality and Selectivity

Initial fishing mortality was fixed at 0.012 for the commercial fishery and assumed to start at equilibrium for the non-commercial fishery. Logistic selectivity was modeled for the commercial and non-commercial fisheries and a dome-shaped double normal selectivity model was used for the research fishing survey (Figure 3).

Figure 3: Selectivity functions used for the commercial (blue line), non-commercial (orange line), and survey (red line).

2.2.3 Data Generation

We evaluated five sampling scenarios to assess how different levels of survey effort affect stock assessment performance under each recruitment condition (Table 1). Survey effort was defined by the number of sampling grids covered, which directly influenced the precision of the resulting abundance index and length composition data. Catch, CPUE, and length composition data were simulated from the OM based on the population dynamics generated by the specific F and recruitment deviation time series (Figure 4).

Survey Effort Scenarios
The scenarios ranged from high survey effort (HSE) with 632 grids to extra low survey effort (XLSE) with only 257 grids. As expected, higher effort translated to more precise data: index CVs ranged from 15% (HSE) to 30% (XLSE), and effective sample sizes for length compositions ranged from 60 (HSE) to 15 (XLSE). The no survey effort scenario (NSE) represented a baseline case where stock assessments would rely solely on fishery-dependent data.

Table 1: The observational error used to sample data from the OM for high survey effort (HSE), intermediate survey effort (ISE), low survey effort (LSE), extra low survey effort (XLSE), and no survey effort (NSE). For each survey effort, the number of sampling grids (Grids) that effort would translate to, the resulting index of abundance CV, and effective sample size for length composition data are shown. All sampling effort scenarios were tested under both normal and poor recruitment conditions.
Sampling Effort   Grids   CV     effN
HSE               632     0.15   60
ISE               557     0.18   45
LSE               407     0.24   30
XLSE              257     0.30   15
NSE               0       0.00   0

Data Sources by Scenario
All scenarios had some combination of catch, abundance index, and length composition data available for the EM (Figure 4). For all survey scenarios (HSE through XLSE), commercial CPUE and length data were discontinued after 2023, forcing the estimation model to rely entirely on survey-based abundance and length data for the final 25 years. This design allowed us to isolate the effect of survey data quality on assessment performance. In contrast, the NSE scenario continued using commercial fishery data throughout the entire time series.

Figure 4: Data generated from operating models and used in the estimation models with the survey (a) and the estimation models without the survey (b). The size of each point represents the relative amount of data available per year and fleet (i.e. large points represent years with more data).

Hyperstability in Fishery Data
For the NSE scenario under poor recruitment conditions, we incorporated hyperstability into the commercial CPUE data to reflect realistic fishery behavior during population decline. The CPUE and catch data were modified to not fully reflect the underlying population decline, simulating the tendency of commercial fisheries to maintain catch rates even as fish abundance decreases through improved fishing efficiency, targeting behavior, or fishing in higher-density areas.
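One common way to parameterize hyperstability is a power relationship between the index and abundance, CPUE = qB^β with β < 1, so the index declines more slowly than the true population. The report does not specify the functional form it used, and β = 0.5 below is an assumed value chosen purely to illustrate the effect:

```python
# Hyperstable index: CPUE = q * B^beta with beta < 1 (assumed form and value;
# the report does not give the parameterization it actually used).
q, beta = 1.0, 0.5
biomass = [1000.0, 800.0, 600.0, 400.0]  # true declining biomass (illustrative)
cpue = [q * b ** beta for b in biomass]

decline_in_biomass = 1 - biomass[-1] / biomass[0]  # 60% true decline
decline_in_cpue = 1 - cpue[-1] / cpue[0]           # only ~37% apparent decline
```

Under these assumed values, a 60% drop in biomass appears as only about a 37% drop in CPUE, which is the kind of masked decline the NSE scenario was designed to represent.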

Observation Error Specification
Sampling error (variability in data sampled from the OM) and observation error specified in the EM were the same, ensuring no misspecification of uncertainty. The CV and effective sample size parameters remained constant across all years within each scenario (Table 1).

2.3 Estimation Model

The only structural difference between the EM and the OM was that, for the poor recruitment condition, a recruitment regime time block was introduced for the last 15 years. Without the regime, the model was unable to fit the persistent decline in recruitment and the relative errors in outputs were greatly inflated. When fitting the model, all parameters were fixed except for \(R_0\), the regime parameter for the poor recruitment scenarios, catchability for the commercial and survey fleets, and selectivity parameters for the commercial fishery and survey fleet. All life history and fishery relationships were fixed at their true values (as specified in the OM).

2.4 Performance Indices

To compare the bias and precision of each sampling strategy, we looked at median relative error (MRE) and median absolute relative error (MARE) between the OM and EM for spawning stock biomass (SSB), fishing mortality (F), age-0 recruitment, and reference points SSB/SSBMSST and F/FMSY.

\[ MRE = \mathrm{median}\left(\frac{EM - OM}{OM}\right) \]

\[ MARE = \mathrm{median}\left(\left|\frac{EM - OM}{OM}\right|\right) \]

where EM is the estimated value from the EM and OM is the true value from the OM.
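The two performance indices are straightforward to compute from paired EM/OM values; a minimal sketch (the illustrative vectors here are made-up numbers, not study results):

```python
import statistics

def mre(em_values, om_values):
    # Median relative error: median((EM - OM) / OM); sign shows bias direction
    return statistics.median((e - o) / o for e, o in zip(em_values, om_values))

def mare(em_values, om_values):
    # Median absolute relative error: median(|EM - OM| / OM); magnitude only
    return statistics.median(abs(e - o) / o for e, o in zip(em_values, om_values))

om = [100.0, 200.0, 300.0]  # "true" values from the OM (illustrative)
em = [110.0, 190.0, 300.0]  # corresponding EM estimates
# mre(em, om)  = median(0.10, -0.05, 0.00) = 0.00
# mare(em, om) = median(0.10, 0.05, 0.00)  = 0.05
```

The example illustrates why both indices are reported: opposite-signed errors can cancel in MRE (suggesting no bias) while MARE still reveals the typical error magnitude.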

3 Results

Key results

Normal Recruitment Conditions

  • All sampling strategies (including no survey effort) showed similar bias for SSB and F estimates (MRE 2-3.6% for SSB, -2.4% to -3.3% for F)
  • While bias remained similar, uncertainty (error ranges) increased substantially as sampling effort decreased, with XLSE showing -20% to +35% relative error for SSB
  • Terminal year F and F/FMSY estimates showed greater sensitivity to extremely low sampling effort (XLSE) compared to SSB-based metrics
  • Persistent overestimation of recruitment led to optimistic bias in SSB estimates across all scenarios

Poor Recruitment Conditions

  • No survey effort scenario showed dramatically higher error rates (42% MARE for SSB) compared to other scenarios (9-14% MARE)
  • Performance gaps between high and low effort scenarios widened substantially under recruitment failure
  • All models failed to accurately capture poor stock conditions, with most runs showing overly optimistic stock status estimates (interquartile ranges rarely included true values)
  • While NSE captured true F/FMSY values more often, it came with much wider uncertainty ranges, potentially leading to management overreactions

Out of 1,000 EM runs, 8 did not converge (maximum gradient greater than 1e-4). The EM with NSE under poor recruitment conditions had the lowest convergence rate, with four iterations that did not converge, but overall model convergence was high across all sampling scenarios.

3.1 Normal Recruitment Conditions

We found that under normal recruitment conditions, all sampling strategies were similarly biased (Figure 5) and similarly precise (Figure 6) for SSB. MRE for the terminal year ranged from 2% to 3.6%, and MARE for the terminal year ranged from 5.8% to 10.6%. Even in the scenario with no survey effort, where only the commercial fishery CPUE and length data were available, the MRE was slightly smaller than in the highest effort scenario. This was likely due to the length of the CPUE time series and the effective sample sizes of the length data from the fishery. Uncertainty around SSB, particularly in the last 25 years, increased with decreasing sampling effort. The XLSE scenario had the largest uncertainty, ranging between -20% and 35% relative error from the true values. This discrepancy can have large impacts on reference point estimation.

Figure 5: Timeseries of median (line) and 95% interval (band) relative error of spawning stock biomass for the high (HSE), intermediate (ISE), low (LSE), extra low (XLSE), and no survey effort (NSE) scenarios under both the normal and poor recruitment conditions.
Figure 6: Timeseries of median (line) and 95% interval (band) absolute relative error of spawning stock biomass for the high (HSE), intermediate (ISE), low (LSE), extra low (XLSE), and no survey effort (NSE) scenarios under both the normal and poor recruitment conditions.

We found that fishing mortality also had similar bias (Figure 7) and precision (Figure 8) for all scenarios; however, the bias was negative (estimates were slightly underestimated compared to the true values). Terminal year F MRE ranged between -2.4% and -3.3%. While MRE for all scenarios was similar for SSB and F, the uncertainty increased as the survey effort decreased, except for the NSE scenario, which had similar uncertainty to the HSE scenario. The uncertainty also increased in the future years for all scenarios except NSE. Again, this was likely due to the exclusion of fishery CPUE and length data for those years and the short overlap in time between the survey and fishery data.

Figure 7: Timeseries of median (line) and 95% interval (band) relative error of fishing mortality for the high (HSE), intermediate (ISE), low (LSE), extra low (XLSE), and no survey effort (NSE) scenarios under both the normal and poor recruitment conditions.
Figure 8: Timeseries of median (line) and 95% interval (band) absolute relative error of fishing mortality for the high (HSE), intermediate (ISE), low (LSE), extra low (XLSE), and no survey effort (NSE) scenarios under both the normal and poor recruitment conditions.

We found that the HSE and ISE scenarios best estimated the true age-0 recruitment of the population over the time series, while XLSE estimated it worst, particularly in the last few years (Figure 9). The persistent overestimation of age-0 recruitment in all models helps to explain the positive bias in SSB over time, particularly in the last few years of the simulation.

Figure 9: Timeseries of median (line) absolute relative error of age-0 recruitment for the high (HSE), intermediate (ISE), low (LSE), extra low (XLSE), and no survey effort (NSE) scenarios under both the normal and poor recruitment conditions.

Under normal recruitment conditions, all models performed similarly when estimating stock status (SSB/SSBMSST and F/FMSY). The median relative error of SSB/SSBMSST was slightly overestimated compared to the true status (less than 10% for all model runs). However, the range of relative error increased as sampling effort decreased, with the exception of the NSE scenario. The median relative error for all models was slightly underestimated for F/FMSY, and the range increased with decreasing sampling effort. We found that terminal year F and F/FMSY were more sensitive to XLSE than terminal year SSB and SSB/SSBMSST. However, the other sampling scenarios appeared robust to decreasing sampling effort for F and FMSY.

Figure 10: Distribution of relative error for terminal year SSB, spawning stock biomass at minimum stock size threshold (SSB_MSST), the terminal year stock status (SSB/SSB_MSST), terminal year F, FMSY, and terminal year fishing status (F/FMSY).

3.2 Poor Recruitment Conditions

When tested under poor recruitment conditions, we found that most models performed similarly in terms of bias and precision for SSB (Figure 5, Figure 6) and F (Figure 7, Figure 8); however, the MRE and MARE were greater for all models than under the normal recruitment scenario. Terminal SSB MARE ranged from 9.3% to 14% for all models except NSE, which had a much higher error (42% MARE for SSB). Terminal F MARE ranged from 9.3% to 13% for all models. The MARE for NSE showed an interesting pattern compared to the rest of the sampling scenarios for the F timeseries: it started off much higher than the other scenarios but, during the period of poor recruitment, slightly decreased and became similar to the other sampling scenarios. This could be due to _____ #TODO

We found that under poor recruitment conditions, HSE, ISE, and LSE performed similarly to the HSE and ISE models under the normal recruitment scenario (Figure 9). However, XLSE performed slightly worse and NSE performed the worst, with MARE increasing to over 40% in the last few years of the model.

When evaluating the performance of estimating stock status reference points, we found that the HSE and ISE scenarios performed the best for terminal year SSB and SSB/SSBMSST (Figure 10). The median relative errors were closest to 0 for both quantities and the range of uncertainty was the smallest. NSE performed the worst, with a median relative error around 40% for both terminal year SSB and SSB/SSBMSST. The range of uncertainty became notably wider across sampling scenarios, with the largest range under the NSE scenario. It should be noted, though, that the true terminal year SSB or SSB/SSBMSST was not recovered for the majority of runs under any scenario (the interquartile range does not include 0). This suggests that under poor recruitment conditions, all of the tested models would provide an overly optimistic view of the stock status. A similar pattern held true for terminal year F and F/FMSY. HSE had the smallest range of error, and the range became wider as sampling effort decreased, except for the NSE scenario. For fishing mortality based estimates, the NSE scenario performed well: the MRE for both F and F/FMSY was close to 0, and the true F and F/FMSY values were captured within 50% of the runs (0 is included in the interquartile range). However, while that scenario captured the true value more closely, the trade-off is that the range of error was much wider than in any of the other scenarios. This could lead to a greater overestimation of F/FMSY, which could have negative impacts on the fishery such as unnecessary reductions in allocations or closures.

4 Discussion

Key points
  • The poor recruitment condition reveals the importance of adequate sampling
  • Management reference points become more uncertain as sampling effort decreases
  • The NSE scenario may be adequate in some situations; however, a few critical assumptions would need to be considered before taking this approach
  • During periods of recruitment failure, enhanced rather than reduced monitoring may be necessary to maintain assessment reliability
  • Managers should account for increased uncertainty in stock status during periods of both poor recruitment and reduced survey effort
  • While reduced survey effort may seem economically attractive, the substantial increases in assessment uncertainty could lead to suboptimal management decisions with significant long-term costs

In this simulation, we found that if normal recruitment persists in the future, the sampling effort of the fishery independent survey does not impact our ability to estimate the biomass, fishing mortality, and stock status in general, but it does increase our uncertainty around those estimates. On the other hand, we found that if recruitment were to decline in the future and a regime-type shift occurred, the survey effort impacts our ability to estimate biomass, fishing mortality, and stock status and the uncertainty around those estimates as well. Particularly, if there is no fishery-independent survey, our ability to estimate key quantities deteriorates significantly.

The poor recruitment scenario revealed critical thresholds for maintaining assessment reliability under challenging population conditions. High and intermediate survey effort scenarios (HSE and ISE) demonstrated comparable performance across all metrics, including median relative error (MRE), median absolute relative error (MARE), and the 95% confidence intervals of relative error. Lower effort scenarios (LSE and XLSE) maintained similar central tendency measures but exhibited substantially wider confidence intervals, indicating reduced precision in parameter estimates. This pattern illustrates how poor recruitment conditions amplify the negative consequences of reduced sampling effort, creating a compounding effect where assessment reliability deteriorates precisely when accurate population monitoring becomes most critical for management decisions. The performance degradation observed below intermediate effort levels suggests that survey coverage should be maintained between 550-630 grids to ensure adequate assessment quality for ’ōpakapaka stock management, particularly given the potential for unexpected recruitment variability in this system.

One interesting finding was the stark contrast in the NSE scenario’s performance under the two recruitment conditions. The NSE scenario’s performance suggests that, in some cases, no survey data may be adequate if the fishery CPUE index accurately tracks population trends. Having one continuous data source of good quality may be preferable to shorter time series of data or very poor quality survey data. However, this was only true under the normal recruitment conditions; once poor recruitment was introduced, the model performed far worse than the others in almost every metric. A major assumption of the NSE model is that the fishery data, particularly the CPUE and size data, accurately characterize the full population. This is likely untrue, as fishers are knowledgeable about where and when to find larger fish and will likely be able to maintain a healthy catch even if the population is declining. Therefore, we should not assume that the fishery is an accurate, unbiased representation of what is actually happening in the population.

Our simulation relies on several key assumptions that may not fully capture the complexity of real-world fishery dynamics and data collection challenges. The NSE scenario under poor recruitment assumed complete hyperstability in fishery data, meaning catch rates remained unaffected by population decline throughout the 25-year projection period. While hyperstability is a documented phenomenon in fishery-dependent CPUE indices, sustained recruitment failure would likely eventually manifest in commercial catch data, suggesting our model may have overestimated the NSE scenario’s limitations. Nevertheless, this represents a plausible worst-case scenario that highlights the risks of relying solely on fishery-dependent data during periods of stock decline. Additionally, our model simplified future data availability by excluding fishery CPUE and length composition data after 2023, which artificially increased uncertainty even in high survey effort scenarios. In practice, these fishery data streams would continue but would be appropriately weighted within the assessment framework to balance their influence against survey observations. Finally, the poor recruitment scenarios employed reduced variability in recruitment deviations to clearly establish declining trends, whereas natural recruitment variability typically remains high even during periods of average decline. This simplification may have amplified the apparent impacts of recruitment failure, as realistic variability could obscure declining trends and complicate early detection of regime shifts.

While consistent long-term monitoring remains ideal, a combination of adaptive sampling strategies and improved data collection techniques could help balance assessment reliability with resource constraints. An adaptive approach could involve reducing survey effort during periods when recruitment appears stable and population indicators show no signs of decline, then increasing sampling intensity when early warning signals suggest recruitment failure or population stress. However, such adaptive strategies must be implemented cautiously, as maintaining data continuity over extended periods provides irreplaceable value for detecting long-term trends and regime shifts. Complementary improvements to data collection could enhance assessment quality without necessarily increasing survey effort, such as incorporating annual age data from the fishery-independent survey to better understand population structure and recruitment dynamics, or refining fishing methodology to reduce coefficient of variation in catch estimates.

These results highlight several critical considerations for fishery management. The first is that maintaining adequate survey effort is essential for reliable stock assessments, particularly for fishing mortality estimates used in management decisions. The second is that during periods of recruitment failure, enhanced rather than reduced monitoring may be necessary to maintain assessment reliability. We saw a noticeable decrease in our estimation abilities when recruitment was poor and there was a reduced survey effort. Additionally, scientists and managers should account for increased uncertainty in stock status during periods of both poor recruitment and reduced survey effort (even under normal recruitment conditions). Lastly, while reduced survey effort may seem economically attractive, the substantial increases in assessment uncertainty could lead to suboptimal management decisions with significant long-term costs for the fishery.

References

Methot, Richard D., and Chantell R. Wetzel. 2013. “Stock Synthesis: A Biological and Statistical Framework for Fish Stock Assessment and Fishery Management.” Fisheries Research 142 (May): 86–99. https://doi.org/10.1016/J.FISHRES.2012.10.012.